33 research outputs found
Measurement of the very rare K+→π+νν¯ decay
The decay K+→π+νν¯, with a very precisely predicted branching ratio of less than 10^-10, is among the best processes with which to reveal indirect effects of new physics. The NA62 experiment at the CERN SPS is designed to study the K+→π+νν¯ decay and to measure its branching ratio using a decay-in-flight technique. NA62 took data in 2016, 2017 and 2018, reaching the Standard Model sensitivity for the K+→π+νν¯ decay with the analysis of the 2016 and 2017 data, and providing the most precise measurement of the branching ratio to date with the analysis of the 2018 data. This measurement is also used to set limits on BR(K+→π+X), where X is a scalar or pseudo-scalar particle. The final result of the BR(K+→π+νν¯) measurement, and its interpretation in terms of the K+→π+X decay, from the analysis of the full 2016-2018 data set is presented, and future plans and prospects are reviewed.
PanDA for ATLAS Distributed Computing in the Next Decade
The Production and Distributed Analysis (PanDA) system has been developed to meet ATLAS production and analysis requirements for a data-driven workload management system capable of operating at the Large Hadron Collider (LHC) data processing scale. Heterogeneous resources used by the ATLAS experiment are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, dozens of scientific applications are supported, while data processing requires more than a few billion hours of computing usage per year. PanDA performed very well over the last decade including the LHC Run 1 data taking period. However, it was decided to upgrade the whole system concurrently with the LHC’s first long shutdown in order to cope with rapidly changing computing infrastructure. After two years of reengineering efforts, PanDA has embedded capabilities for fully dynamic and flexible workload management. The static batch job paradigm was discarded in favor of a more automated and scalable model. Workloads are dynamically tailored for optimal usage of resources, with the brokerage taking network traffic and forecasts into account. Computing resources are partitioned based on dynamic knowledge of their status and characteristics. The pilot has been re-factored around a plugin structure for easier development and deployment. Bookkeeping is handled with both coarse and fine granularities for efficient utilization of pledged or opportunistic resources. Leveraging direct remote data access and federated storage relaxes the geographical coupling between processing and data. An in-house security mechanism authenticates the pilot and data management services in off-grid environments such as volunteer computing and private local clusters. 
The PanDA monitor has been extensively optimized for performance and extended with analytics to provide aggregated summaries of the system as well as drill-down to operational details. Many further improvements are planned or have recently been implemented, and PanDA has been adopted beyond the LHC: for example, bioinformatics groups have successfully run the Paleomix (microbial genome and metagenome) payload on supercomputers. In this talk we will focus on the new and planned features that are most important to the next decade of distributed computing workload management.
PanDA: Exascale Federation of Resources for the ATLAS Experiment at the LHC
After a scheduled maintenance and upgrade period, the world’s largest and most powerful machine – the Large Hadron Collider (LHC) – is about to enter its second run at unprecedented energies. In order to exploit the scientific potential of the machine, the experiments at the LHC face computational challenges with enormous data volumes that need to be analysed by thousands of physicists and compared to simulated data. Given diverse funding constraints, the computational resources for the LHC have been deployed in a worldwide mesh of data centres, connected to each other through Grid technologies.
The PanDA (Production and Distributed Analysis) system was developed in 2005 for the ATLAS experiment on top of this heterogeneous infrastructure to seamlessly integrate the computational resources and give users the feel of a single system. Since its origins, PanDA has evolved together with emerging computing paradigms in and outside HEP, such as changes in the networking model, Cloud Computing and HPC. It currently runs steadily on up to 200 thousand simultaneous cores (limited by the resources available to ATLAS), handles up to two million aggregated jobs per day, and processes over an exabyte of data per year. The success of PanDA in ATLAS is triggering widespread adoption and testing by other experiments. In this contribution we will give an overview of the PanDA components and focus on the new features and upcoming challenges that are relevant to the next decade of distributed computing workload management using PanDA.
Operational Intelligence for Distributed Computing Systems for Exascale Science
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets requires a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals by allowing timely delivery of scientific results. However, a substantial amount of intervention from software developers, shifters and operational teams is needed to efficiently manage such heterogeneous infrastructures. A wealth of operational data can be exploited to increase the level of automation in computing operations by using adequate techniques, such as machine learning (ML), tailored to solve specific problems. The Operational Intelligence project is a joint effort from various WLCG communities aimed at increasing the level of automation in computing operations. We discuss how state-of-the-art technologies can be used to build general solutions to common problems and to reduce the operational cost of the experiment computing infrastructure.
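As a toy illustration of the kind of data-driven automation discussed above (the metric, site, and numbers below are hypothetical, not taken from the project), one of the simplest useful building blocks is flagging anomalous operational metrics, such as a spike in transfer failures, with a rolling z-score test:

```python
import statistics

def rolling_anomalies(series, window=5, threshold=3.0):
    """Return indices whose value deviates from the mean of the preceding
    `window` samples by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mean = statistics.mean(past)
        std = statistics.pstdev(past) or 1e-9  # guard against zero spread
        if abs(series[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# Hourly transfer-failure counts at a hypothetical site: stable, then a spike.
failures = [4, 5, 3, 4, 5, 4, 5, 3, 4, 60, 5, 4]
print(rolling_anomalies(failures))  # flags the spike at index 9
```

In production one would replace the threshold rule with a trained model and wire the flag into an alerting or auto-recovery workflow, but the input (time series of operational metrics) and output (actionable anomalies) have the same shape.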
Search for heavy neutral lepton production in K+ decays
A search for heavy neutral lepton production in K+ decays using a data sample collected with a minimum bias trigger by the NA62 experiment at CERN in 2015 is reported. Upper limits at the 10^-7 to 10^-6 level are established on the elements of the extended neutrino mixing matrix |Ue4|^2 and |Uμ4|^2 for heavy neutral lepton masses in the ranges 170-448 MeV/c^2 and 250-373 MeV/c^2, respectively. This improves on the previous limits from HNL production searches over the whole mass range considered for |Ue4|^2, and above 300 MeV/c^2 for |Uμ4|^2.
First search for K+→π+νν¯ using the decay-in-flight technique
The NA62 experiment at the CERN SPS reports the first search for K+→π+νν¯ using the decay-in-flight technique, based on a sample of 1.21×10^11 K+ decays collected in 2016. The single event sensitivity is 3.15×10^-10, corresponding to 0.267 Standard Model events. One signal candidate is observed while the expected background is 0.152 events. This leads to an upper limit of 14×10^-10 on the K+→π+νν¯ branching ratio at 95% CL.
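The published limit comes from a CLs treatment, but the quoted numbers can be roughly cross-checked with a simple classical Poisson upper limit built from the single event sensitivity, observed count, and expected background. This sketch (pure standard-library Python, my own cross-check rather than the analysis procedure) reproduces the order of magnitude:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def classical_upper_limit(n_obs, background, cl=0.95):
    """Smallest signal s such that P(N <= n_obs | s + b) drops to 1 - cl,
    found by bisection; ignores background uncertainty."""
    alpha = 1.0 - cl
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + background) > alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

SES = 3.15e-10  # single event sensitivity of the 2016 analysis
n_ul = classical_upper_limit(n_obs=1, background=0.152, cl=0.95)
br_ul = n_ul * SES  # upper limit on BR(K+ -> pi+ nu nubar)
```

With one observed event over 0.152 expected background this gives a signal limit of about 4.6 events, i.e. a branching ratio limit near 14×10^-10, consistent with the quoted result.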
A search for the decay
A search for the decay, forbidden within the Standard Model by either lepton number or lepton flavour conservation depending on the flavour of the emitted neutrino, has been performed using the dataset collected by the NA62 experiment at CERN in 2016-2018. An upper limit of is obtained for the decay branching fraction at 90% CL, improving by a factor of 250 over the previous search.
Searches for lepton number violating K+→π−e+e+ and K+→π−π0e+e+ decays
Searches for lepton number violating K+→π−e+e+ and K+→π−π0e+e+ decays have been performed using the complete dataset collected by the NA62 experiment at CERN in 2016-2018. Upper limits of 5.3×10^-11 and 8.5×10^-10 are obtained on the decay branching fractions at 90% confidence level. The former result improves by a factor of four over the previous best limit, while the latter represents the first limit on the K+→π−π0e+e+ decay rate.
Search for K+ decays to a muon and invisible particles
The NA62 experiment at CERN reports searches for K+→μ+N and K+→μ+νX decays, where N and X are massive invisible particles, using the 2016-2018 data set. The N particle is assumed to be a heavy neutral lepton, and the results are expressed as upper limits of O(10^-8) on the neutrino mixing parameter |Uμ4|^2 for N masses in the range 200-384 MeV/c^2 and lifetimes exceeding 50 ns. The X particle is considered a scalar or vector hidden sector mediator decaying to an invisible final state, and upper limits on the decay branching fraction for X masses in the range 10-370 MeV/c^2 are reported for the first time, ranging from O(10^-5) to O(10^-7). An improved upper limit of 1.0×10^-6 is established at 90% CL on the K+→μ+ννν¯ branching fraction.